Proximal SCOPE for Distributed Sparse Learning

Neural Information Processing Systems

Distributed sparse learning over a cluster of machines has attracted much attention in machine learning, especially for large-scale applications with high-dimensional data. A popular way to induce sparsity is L1 regularization. In this paper, we propose a novel method, called proximal SCOPE (pSCOPE), for distributed sparse learning with L1 regularization.
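The core primitive behind L1-regularized proximal methods such as pSCOPE is the proximal operator of the L1 norm, which reduces to elementwise soft-thresholding. The sketch below shows that standard operator and a single proximal gradient step on a toy least-squares loss; it illustrates the building block only, not the paper's distributed algorithm, and the step size and regularization strength are illustrative choices.

```python
# Minimal sketch of the proximal building block behind L1-regularized
# learning: the prox of lam * ||w||_1 is elementwise soft-thresholding.
# This is the standard operator, not pSCOPE's full distributed algorithm;
# `step` and `lam` below are illustrative values.
import numpy as np

def soft_threshold(v: np.ndarray, lam: float) -> np.ndarray:
    """prox of lam * ||.||_1: shrink each coordinate toward zero by lam."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def proximal_gradient_step(w, grad, step, lam):
    """One proximal gradient step for f(w) + lam * ||w||_1."""
    return soft_threshold(w - step * grad, step * lam)

# Toy usage: one step on a least-squares loss yields a sparse iterate.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
w = np.zeros(10)
grad = X.T @ (X @ w - y) / len(y)
w = proximal_gradient_step(w, grad, step=0.1, lam=0.5)
print(np.count_nonzero(w), "nonzero coordinates")
```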







Test-Time Collective Prediction

Neural Information Processing Systems

An increasingly common setting in machine learning involves multiple parties, each with its own data, that want to jointly make predictions on future test points. Each agent wishes to benefit from the collective expertise of the group to make better predictions than it could individually, but may be unwilling to release its labeled data or model parameters.
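To make the setting concrete, here is a minimal sketch in which each agent exposes only a predict() interface, so neither labeled data nor model parameters leave an agent, and a coordinator combines per-agent predictions on new test points. Plain (optionally weighted) averaging is used purely for illustration; it is an assumed aggregation rule, not the paper's actual mechanism.

```python
# Sketch of the collective-prediction setting: agents share only their
# predictions on test points, never data or parameters. The averaging
# aggregation here is an illustrative assumption, not the paper's method.
from typing import Callable, Optional, Sequence
import numpy as np

def collective_predict(
    agent_predictors: Sequence[Callable[[np.ndarray], np.ndarray]],
    x_test: np.ndarray,
    weights: Optional[Sequence[float]] = None,
) -> np.ndarray:
    """Combine per-agent predictions without touching data or parameters."""
    preds = np.stack([predict(x_test) for predict in agent_predictors])
    return np.average(preds, axis=0, weights=weights)

# Toy usage: three "agents" are fitted linear models hidden behind closures,
# so only their predictions ever leave them.
rng = np.random.default_rng(1)
agents = []
for _ in range(3):
    X, y = rng.normal(size=(30, 4)), rng.normal(size=30)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    agents.append(lambda x, w=w: x @ w)

x_new = rng.normal(size=(5, 4))
print(collective_predict(agents, x_new))
```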



Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning

Neural Information Processing Systems

An important distinction of our work from the standard federated learning (FL) literature is that the agents are self-interested and hence not necessarily cooperative, unlike the worker nodes in distributed learning.
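As a purely illustrative reading of the title, the sketch below scores each self-interested agent by the cosine alignment between its local gradient and the aggregated gradient, then normalizes the scores into rewards. This mechanism is an assumption made for illustration, not the paper's exact fairness-guaranteeing reward scheme.

```python
# Illustrative gradient-driven contribution measure (assumed, not the
# paper's exact scheme): reward agents by how well their local gradients
# align with the aggregate gradient.
import numpy as np

def cosine_alignment(g_agent: np.ndarray, g_total: np.ndarray) -> float:
    """Cosine similarity between an agent's gradient and the aggregate."""
    denom = np.linalg.norm(g_agent) * np.linalg.norm(g_total)
    return float(g_agent @ g_total / denom) if denom > 0 else 0.0

def gradient_driven_rewards(agent_grads) -> np.ndarray:
    """Normalize nonnegative alignment scores into a reward distribution."""
    g_total = np.sum(agent_grads, axis=0)
    scores = np.array([max(cosine_alignment(g, g_total), 0.0)
                       for g in agent_grads])
    total = scores.sum()
    return scores / total if total > 0 else scores

# Toy usage: an agent whose gradient opposes the aggregate earns no reward.
grads = [np.array([1.0, 0.5]), np.array([0.9, 0.6]), np.array([-1.0, -0.5])]
print(gradient_driven_rewards(grads))
```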